We present a novel neural rendering pipeline, Hybrid Volumetric-Textural Rendering (HVTR), which synthesizes virtual human avatars from arbitrary poses efficiently and at high quality. First, we learn to encode articulated human motions on a dense UV manifold of the human body surface. To handle complicated motions (e.g., self-occlusions), we then leverage the encoded information on the UV manifold to construct a 3D volumetric representation based on a dynamic pose-conditioned neural radiance field. While this allows us to represent 3D geometry with changing topology, volumetric rendering is computationally heavy. We therefore employ only a rough volumetric representation using a pose-conditioned downsampled neural radiance field (PD-NeRF), which we can render efficiently at low resolution. In addition, we learn 2D textural features that are fused with the rendered volumetric features in image space. The key advantage of our approach is that we can then convert the fused features into a high-resolution, high-quality avatar with a fast GAN-based textural renderer. We demonstrate that hybrid rendering enables HVTR to handle complicated motions, render high-quality avatars under user-controlled poses/shapes and even loose clothing, and, most importantly, be fast at inference time. Our experimental results also demonstrate state-of-the-art quantitative results.
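To make the hybrid design concrete, here is a minimal PyTorch sketch of the fusion step: a stand-in coarse feature image (in place of actual PD-NeRF ray marching) is fused in image space with 2D textural features and upsampled by a convolutional renderer standing in for the GAN-based textural renderer. All module names and channel sizes are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of HVTR-style hybrid rendering (assumed architecture).
import torch
import torch.nn as nn

class HybridRenderer(nn.Module):
    def __init__(self, vol_ch=16, tex_ch=16, out_res_scale=4):
        super().__init__()
        # stand-in for the pose-conditioned downsampled NeRF (PD-NeRF):
        # here just a conv net mapping a pose encoding to a coarse feature image
        self.coarse_field = nn.Sequential(
            nn.Conv2d(3, vol_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(vol_ch, vol_ch, 3, padding=1))
        # 2D textural features from the UV manifold (illustrative)
        self.tex_features = nn.Sequential(
            nn.Conv2d(3, tex_ch, 3, padding=1), nn.ReLU())
        # fast upsampling renderer replacing full-resolution volume rendering
        self.upsampler = nn.Sequential(
            nn.Conv2d(vol_ch + tex_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=out_res_scale, mode='bilinear'),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, pose_map, uv_texture):
        vol = self.coarse_field(pose_map)     # low-res volumetric features
        tex = self.tex_features(uv_texture)   # image-space textural features
        fused = torch.cat([vol, tex], dim=1)  # fuse in image space
        return self.upsampler(fused)          # high-res RGB avatar

low_res = 64
img = HybridRenderer()(torch.randn(1, 3, low_res, low_res),
                       torch.randn(1, 3, low_res, low_res))
print(img.shape)  # torch.Size([1, 3, 256, 256])
```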
Traditional 2D animation is labor-intensive, often requiring animators to draw twelve illustrations per second of movement by hand. While automatic frame interpolation may ease this burden, the artistic effects inherent to 2D animation make video synthesis particularly challenging compared to the photorealistic domain: lower framerates result in larger displacements and occlusions, discrete perceptual elements (e.g., lines and solid-color regions) pose difficulties for texture-oriented convolutional networks, and exaggerated nonlinear movements hinder training data collection. Previous work has tried to address these issues, but used unscalable methods and focused on pixel-perfect performance. In contrast, we build a scalable system centered more appropriately on perceptual quality for this artistic domain. First, we propose a lightweight architecture with a simple yet effective occlusion-handling technique that improves convergence on perceptual metrics with fewer trainable parameters. Second, we design a novel auxiliary module that leverages the Euclidean distance transform to improve the preservation of key line and region structures. Third, we automatically double the existing manually collected dataset for this task by quantifying movement nonlinearity, allowing us to improve model generalization. Finally, through a user study we establish LPIPS and chamfer distance as preferable to PSNR and SSIM, validating our system's emphasis on perceptual quality in the 2D animation domain.
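As a hedged illustration of how a Euclidean distance transform can supervise line preservation (this is a toy metric, not the paper's auxiliary module): penalize each predicted line pixel by its distance to the nearest ground-truth line pixel.

```python
# Toy line-preservation score based on the Euclidean distance transform.
import numpy as np
from scipy.ndimage import distance_transform_edt

def line_distance_loss(pred_lines, gt_lines):
    """pred_lines, gt_lines: boolean HxW masks of line pixels."""
    if pred_lines.sum() == 0:
        return 0.0
    # distance of every pixel to the nearest ground-truth line pixel
    dist_to_gt = distance_transform_edt(~gt_lines)
    # average that distance over the predicted line pixels
    return float(dist_to_gt[pred_lines].mean())

gt = np.zeros((64, 64), dtype=bool); gt[32, :] = True  # horizontal line
pred = np.zeros_like(gt); pred[30, :] = True           # line drawn 2px off
print(line_distance_loss(pred, gt))  # 2.0 -> each pred pixel is 2px from gt
```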
We present EgoRenderer, a system for rendering full-body neural avatars of a person captured by a wearable, egocentric fisheye camera mounted on a cap or a VR headset. Our system renders photorealistic novel views of the actor and her motion from arbitrary virtual camera locations. Rendering full-body avatars from such egocentric images comes with unique challenges due to the top-down view and large distortions. We tackle these challenges by decomposing the rendering process into several steps, including texture synthesis, pose construction, and neural image translation. For texture synthesis, we propose Ego-DPNet, a neural network that infers dense correspondences between the input fisheye images and an underlying parametric body model, and extracts textures from the egocentric inputs. In addition, to encode dynamic appearance, our approach also learns an implicit texture stack that captures detailed appearance variation across poses and viewpoints. For correct pose generation, we first estimate body pose from the egocentric view using a parametric model. We then synthesize an external free-viewpoint pose image by projecting the parametric model to the user-specified target viewpoint. Next, we combine the target pose image and the textures into a combined feature image, which is transformed into the output color image by a neural image translation network. Experimental evaluations show that EgoRenderer is capable of generating realistic free-viewpoint avatars of a person wearing an egocentric camera. Comparisons to several baselines demonstrate the advantages of our approach.
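A minimal sketch of the pose-construction step (assumed names and a pinhole camera; not the EgoRenderer code): 3D body joints estimated from the egocentric view are projected into a user-specified target camera to form the target pose image.

```python
# Project 3D joints into a user-specified target viewpoint (toy pinhole model).
import numpy as np

def project_joints(joints_3d, R, t, f=500.0, cx=128.0, cy=128.0):
    """Pinhole projection of Nx3 world-space joints into a target camera."""
    cam = joints_3d @ R.T + t           # world -> target camera frame
    uv = cam[:, :2] / cam[:, 2:3]       # perspective divide
    return uv * f + np.array([cx, cy])  # to pixel coordinates

joints = np.array([[0.0, 0.0, 2.0], [0.3, -0.5, 2.0]])  # toy two-joint skeleton
R, t = np.eye(3), np.zeros(3)                           # identity target view
print(project_joints(joints, R, t))
# [[128. 128.] [203.   3.]] -- joint pixels to rasterize into the pose image
```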
We study a referential game (a type of signaling game) in which two agents communicate with each other via a discrete bottleneck to achieve a common goal. In our referential game, the goal of the speaker is to compose a message, or a symbolic representation of "important" image patches, while the task of the listener is to match the speaker's message with a different view of the same image. We show that the two agents can indeed develop a communication protocol without explicit or implicit supervision. We further investigate the developed protocol and show applications in speeding up recent vision transformers by using only the important patches, and as pre-training for downstream recognition tasks (e.g., classification). Code is available at https://github.com/kampta/patchgame.
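A hedged sketch of the discrete-bottleneck idea: the speaker emits symbols from a finite vocabulary via a straight-through Gumbel-softmax so the game stays end-to-end differentiable. Sizes and module names are illustrative; this is not the PatchGame implementation.

```python
# Discrete speaker-listener bottleneck via straight-through Gumbel-softmax.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab, dim = 128, 64
speaker = nn.Linear(dim, vocab)    # patch features -> symbol logits
listener_embed = nn.Embedding(vocab, dim)

patch_feats = torch.randn(8, dim)  # 8 "important" patches, one symbol each
logits = speaker(patch_feats)
# hard one-hot samples in the forward pass, soft gradients in the backward pass
symbols = F.gumbel_softmax(logits, tau=1.0, hard=True)
message = symbols @ listener_embed.weight  # listener re-embeds the symbols
print(symbols.argmax(dim=-1))              # the discrete message tokens
```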
Human pose information is a critical component in many downstream image processing tasks, such as activity recognition and motion tracking. Likewise, a pose estimator for the illustrated character domain would provide a valuable prior for assistive content creation tasks, such as reference pose retrieval and automatic character animation. However, while modern data-driven techniques have substantially improved pose estimation performance on natural images, little work has been done for illustrations. In our work, we bridge this domain gap by efficiently transfer-learning from both domain-specific and task-specific source models. Additionally, we upgrade and expand an existing illustrated pose estimation dataset, and introduce two new datasets for the classification and segmentation subtasks. We then apply the resulting state-of-the-art character pose estimator to solve the novel task of pose-guided illustration retrieval. All data, models, and code will be made publicly available.
With the advent of Neural Style Transfer (NST), stylizing an image has become quite popular. A convenient way of extending stylization techniques to videos is to apply them on a per-frame basis. However, such per-frame application usually lacks temporal consistency, which manifests as undesirable flickering artifacts. Most of the existing approaches for enforcing temporal consistency suffer from one or more of the following drawbacks: they (1) are only suitable for a limited range of stylization techniques, (2) can only be applied in an offline fashion requiring the complete video as input, (3) cannot provide consistency for the task of stylization, or (4) do not provide interactive consistency control. Note that existing consistent video-filtering approaches aim to completely remove flickering artifacts and thus do not respect any specific consistency-control aspect. For stylization tasks, however, consistency control is an essential requirement, where a certain amount of flickering can add to the artistic look and feel. Moreover, making this control interactive is paramount from a usability perspective. To achieve the above requirements, we propose an approach that can stylize video streams while providing interactive consistency control. Apart from stylization, our approach also supports various other image processing filters. To achieve interactive performance, we develop a lite optical-flow network that operates at 80 frames per second (FPS) on desktop systems with sufficient accuracy. We show that the final consistent video output using our flow network is comparable to that obtained using a state-of-the-art optical-flow network. Further, we employ an adaptive combination of local and global consistent features and enable interactive selection between the two. Through objective and subjective evaluation, we show that our method is superior to state-of-the-art approaches.
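A simplified sketch of consistency control (not the paper's pipeline): warp the previous stylized frame with optical flow and blend it with the current per-frame stylization; the blend weight acts as the interactive consistency knob (0 = no consistency, 1 = fully warped history). OpenCV's Farneback flow stands in here for the paper's lite flow network.

```python
# Flow-warped blending as a toy consistency-control mechanism.
import cv2
import numpy as np

def consistent_frame(prev_stylized, cur_stylized, prev_gray, cur_gray, alpha=0.5):
    # flow from the current frame back to the previous frame
    flow = cv2.calcOpticalFlowFarneback(cur_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = cur_gray.shape
    grid = np.dstack(np.meshgrid(np.arange(w), np.arange(h))).astype(np.float32)
    # backward-warp the previous stylized frame into the current frame
    warped = cv2.remap(prev_stylized, grid[..., 0] + flow[..., 0],
                       grid[..., 1] + flow[..., 1], cv2.INTER_LINEAR)
    # alpha is the interactive consistency knob
    return cv2.addWeighted(warped, alpha, cur_stylized, 1.0 - alpha, 0)
```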
Vision transformers have emerged as powerful tools for many computer vision tasks. It has been shown that their features and class tokens can be used for salient object segmentation. However, the properties of segmentation transformers remain largely unstudied. In this work we conduct an in-depth study of the spatial attentions of different backbone layers of semantic segmentation transformers and uncover interesting properties. The spatial attentions of a patch intersecting with an object tend to concentrate within the object, whereas the attentions of larger, more uniform image areas instead follow a diffusive behavior. In other words, vision transformers trained to segment a fixed set of object classes generalize to objects well beyond this set. We exploit this by extracting heatmaps that can be used to segment unknown objects within diverse backgrounds, such as obstacles in traffic scenes. Our method is training-free and its computational overhead is negligible. We use off-the-shelf transformers trained for street-scene segmentation to process other scene types.
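As a toy sketch of the heatmap bookkeeping (random tensors stand in for the backbone's real attentions; this is not the paper's exact aggregation): per-layer spatial attention maps of shape (heads, num_patches, num_patches) are aggregated into a single saliency heatmap over the patch grid.

```python
# Aggregate spatial attentions into a patch-grid heatmap (toy data).
import torch

layers, heads, hp, wp = 4, 8, 14, 14
n = hp * wp  # number of patches
attn = [torch.softmax(torch.randn(heads, n, n), dim=-1) for _ in range(layers)]

# average over layers and heads -> (n, n) patch-to-patch attention
received = torch.stack([a.mean(dim=0) for a in attn]).mean(dim=0)
# attention each patch receives, reshaped onto the patch grid
heatmap = received.mean(dim=0).reshape(hp, wp)
print(heatmap.shape)  # torch.Size([14, 14])
```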
The problem of generating an optimal coalition structure for a given coalition game of rational agents is to find a partition that maximizes their social welfare and is known to be NP-hard. This paper proposes GCS-Q, a novel quantum-supported solution for Induced Subgraph Games (ISGs) in coalition structure generation. GCS-Q starts by considering the grand coalition as initial coalition structure and proceeds by iteratively splitting the coalitions into two nonempty subsets to obtain a coalition structure with a higher coalition value. In particular, given an $n$-agent ISG, the GCS-Q solves the optimal split problem $\mathcal{O} (n)$ times using a quantum annealing device, exploring $\mathcal{O}(2^n)$ partitions at each step. We show that GCS-Q outperforms the currently best classical solvers with its runtime in the order of $n^2$ and an expected worst-case approximation ratio of $93\%$ on standard benchmark datasets.
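To illustrate the optimal-split step for an Induced Subgraph Game: splitting a coalition into two parts keeps all intra-part edge weights, so the best split minimizes the weight of the cut, which as a QUBO reads $\min_x \sum_{(i,j)} w_{ij}\,(x_i + x_j - 2 x_i x_j)$ with $x_i \in \{0, 1\}$. GCS-Q hands this problem to a quantum annealer; the sketch below enumerates it classically on a toy instance (names and weights are illustrative).

```python
# Brute-force the optimal coalition split (the QUBO GCS-Q solves by annealing).
from itertools import product

def best_split(n, edges):  # edges: {(i, j): weight}, agents labeled 0..n-1
    def cut_weight(x):     # total weight of edges crossing the split
        return sum(w for (i, j), w in edges.items() if x[i] != x[j])
    return min((tuple(x) for x in product((0, 1), repeat=n)), key=cut_weight)

edges = {(0, 1): 5.0, (1, 2): -3.0, (2, 3): 4.0, (0, 3): -2.0}
print(best_split(4, edges))  # (0, 0, 1, 1): cuts only the negative-weight edges
```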
Cartesian impedance control is a type of motion control strategy for robots that improves safety in partially unknown environments by achieving a compliant behavior of the robot with respect to its external forces. This compliant robot behavior has the added benefit of allowing physical human guidance of the robot. In this paper, we propose a C++ implementation of compliance control valid for any torque-commanded robotic manipulator. The proposed controller implements Cartesian impedance control to track a desired end-effector pose. Additionally, joint impedance is projected in the nullspace of the Cartesian robot motion to track a desired robot joint configuration without perturbing the Cartesian motion of the robot. The proposed implementation also allows the robot to apply desired forces and torques to its environment. Several safety features such as filtering, rate limiting, and saturation are included in the proposed implementation. The core functionalities are in a re-usable base library and a Robot Operating System (ROS) ros_control integration is provided on top of that. The implementation was tested with the KUKA LBR iiwa robot and the Franka Emika Robot (Panda) both in simulation and with the physical robots.
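A hedged Python sketch of the underlying control law (the paper's library is C++; all gain values here are illustrative): Cartesian impedance tracks the end-effector pose, and a joint-space impedance is projected into the nullspace of the task Jacobian so it does not perturb the Cartesian motion.

```python
# Cartesian impedance with nullspace joint impedance (illustrative gains).
import numpy as np

def impedance_torques(J, pose_err, twist, q, dq, q_d, K, D, k_n=10.0, d_n=2.0):
    """J: 6xn end-effector Jacobian; K, D: Cartesian stiffness/damping."""
    tau_task = J.T @ (K @ pose_err - D @ twist)  # Cartesian impedance
    # nullspace projector: joint torques that do not disturb the task
    N = np.eye(J.shape[1]) - J.T @ np.linalg.pinv(J.T)
    tau_null = N @ (k_n * (q_d - q) - d_n * dq)  # joint impedance in nullspace
    return tau_task + tau_null

n = 7  # e.g., a 7-DoF arm such as the Panda
J = np.random.randn(6, n)
tau = impedance_torques(J, np.zeros(6), np.zeros(6), np.zeros(n),
                        np.zeros(n), np.ones(n), np.eye(6) * 200, np.eye(6) * 30)
print(tau.shape)  # (7,)
```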
Active learning as a paradigm in deep learning is especially important in applications involving intricate perception tasks such as object detection, where labels are difficult and expensive to acquire. Development of active learning methods in such fields is highly computationally expensive and time-consuming, which obstructs the progression of research and leads to a lack of comparability between methods. In this work, we propose and investigate a sandbox setup for rapid development and transparent evaluation of active learning in deep object detection. Our experiments with commonly used configurations of datasets and detection architectures found in the literature show that results obtained in our sandbox environment are representative of results on standard configurations. The total compute time to obtain results and assess the learning behavior can thereby be reduced by factors of up to 14 when comparing with Pascal VOC and up to 32 when comparing with BDD100k. This allows for testing and evaluating data acquisition and labeling strategies in under half a day and contributes to the transparency and development speed in the field of active learning for object detection.
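A generic sketch of the acquisition loop such a sandbox evaluates (names and scoring are illustrative placeholders, not the paper's setup): repeatedly score the unlabeled pool, move the highest-scoring samples into the labeled set, and retrain the detector.

```python
# Generic pool-based active learning loop (illustrative stand-in functions).
import random

def active_learning_loop(pool, labeled, train, score, rounds=5, budget=100):
    model = train(labeled)
    for _ in range(rounds):
        ranked = sorted(pool, key=lambda x: score(model, x), reverse=True)
        picked, pool[:] = ranked[:budget], ranked[budget:]
        labeled.extend(picked)  # simulate annotating the picked samples
        model = train(labeled)  # retrain on the enlarged labeled set
    return model

# toy usage: random scores reduce this to the random-sampling baseline
model = active_learning_loop(list(range(1000)), [], lambda d: len(d),
                             lambda m, x: random.random())
print(model)  # the stand-in "model" is just the final labeled-set size: 500
```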